Using Simon's Algorithm to Attack Symmetric-Key Cryptographic Primitives
We present new connections between quantum information and the field of
classical cryptography. In particular, we provide examples where Simon's
algorithm can be used to show insecurity of commonly used cryptographic
symmetric-key primitives. Specifically, these examples consist of a quantum
distinguisher for the 3-round Feistel network and a forgery attack on CBC-MAC
that forges a tag for a chosen-prefix message while querying only other messages
(of the same length). We assume that an adversary has quantum-oracle access to the
respective classical primitives. Similar results have been achieved recently in
independent work by Kaplan et al. Our findings shed new light on the
post-quantum security of cryptographic schemes and underline that classical
security proofs of cryptographic constructions need to be revisited in light of
quantum attackers.
Comment: 14 pages, 2 figures. v3: final polished version, more formal definitions added.
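Both attacks exploit Simon's promise: from quantum-oracle access to the primitive, the adversary builds a function whose outputs collide exactly on the pairs x and x XOR s for a hidden period s, which Simon's algorithm then recovers with O(n) queries. A minimal classical sketch of such a periodic function (a hypothetical toy construction, not the actual Feistel or CBC-MAC reduction):

```python
import random

def make_periodic_f(s, n, seed=0):
    """Build a random-looking f: {0,1}^n -> {0,1}^n satisfying Simon's
    promise f(x) = f(x XOR s) for a hidden period s."""
    rng = random.Random(seed)
    values, f = {}, {}
    for x in range(2 ** n):
        rep = min(x, x ^ s)          # representative of the coset {x, x ^ s}
        if rep not in values:
            values[rep] = rng.randrange(2 ** n)
        f[x] = values[rep]
    return f

n, s = 4, 0b1011
f = make_periodic_f(s, n)
# Every input collides exactly with its partner x XOR s; recovering s
# classically requires collision search (~2^(n/2) queries), whereas Simon's
# algorithm needs only O(n) quantum queries to such an oracle.
assert all(f[x] == f[x ^ s] for x in range(2 ** n))
```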
Quantum Cryptography Beyond Quantum Key Distribution
Quantum cryptography is the art and science of exploiting quantum mechanical
effects in order to perform cryptographic tasks. While the most well-known
example of this discipline is quantum key distribution (QKD), there exist many
other applications such as quantum money, randomness generation, secure two-
and multi-party computation and delegated quantum computation. Quantum
cryptography also studies the limitations and challenges resulting from quantum
adversaries---including the impossibility of quantum bit commitment, the
difficulty of quantum rewinding and the definition of quantum security models
for classical primitives. In this review article, aimed primarily at
cryptographers unfamiliar with the quantum world, we survey the area of
theoretical quantum cryptography, with an emphasis on the constructions and
limitations beyond the realm of QKD.
Comment: 45 pages, over 245 references.
Quantum Lazy Sampling and Game-Playing Proofs for Quantum Indifferentiability
Game-playing proofs constitute a powerful framework for non-quantum
cryptographic security arguments, most notably applied in the context of
indifferentiability. An essential ingredient in such proofs is lazy sampling of
random primitives. We develop a quantum game-playing proof framework by
generalizing two recently developed proof techniques. First, we describe how
Zhandry's compressed quantum oracles~(Crypto'19) can be used to do quantum lazy
sampling of a class of non-uniform function distributions. Second, we observe
how Unruh's one-way-to-hiding lemma~(Eurocrypt'14) can also be applied to
compressed oracles, providing a quantum counterpart to the fundamental lemma of
game-playing. Subsequently, we use our game-playing framework to prove quantum
indifferentiability of the sponge construction, assuming a random internal
function.
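Lazy sampling is the classical bookkeeping trick that the paper lifts to the quantum setting: a uniformly random function is not fixed in advance but sampled point by point, at first query. A sketch of the classical version (hypothetical class name, assuming a uniform output distribution):

```python
import random

class LazyRandomOracle:
    """Classical lazy sampling of a random function into {0,1}^n: an output
    is drawn uniformly only when its input is first queried."""
    def __init__(self, n_bits, seed=None):
        self.n = n_bits
        self.table = {}
        self.rng = random.Random(seed)

    def query(self, x):
        if x not in self.table:              # sample on first use only
            self.table[x] = self.rng.getrandbits(self.n)
        return self.table[x]

H = LazyRandomOracle(128, seed=42)
assert H.query(b"m") == H.query(b"m")        # repeated queries are consistent
assert len(H.table) == 1                     # unqueried points stay unsampled
```

Zhandry's compressed oracles play the same role for quantum queries, where a superposition query would otherwise touch every point of the function at once.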
Robust Cryptography in the Noisy-Quantum-Storage Model
It was shown in [WST08] that cryptographic primitives can be implemented
based on the assumption that quantum storage of qubits is noisy. In this work
we analyze a protocol for the universal task of oblivious transfer that can be
implemented using quantum-key-distribution (QKD) hardware in the practical
setting where honest participants are unable to perform noise-free operations.
We derive trade-offs between the amount of storage noise, the amount of noise
in the operations performed by the honest participants and the security of
oblivious transfer which are greatly improved compared to the results in
[WST08]. As an example, we show that for the case of depolarizing noise in
storage we can obtain secure oblivious transfer as long as the quantum
bit-error rate of the channel does not exceed 11% and the noise on the channel
is strictly less than the quantum storage noise. This is optimal for the
protocol considered. Finally, we show that our analysis easily carries over to
quantum protocols for secure identification.
Comment: 34 pages, 2 figures. v2: clarified novelty of results, improved security analysis using fidelity-based smooth min-entropy. v3: typos and additivity proof in appendix corrected.
Multi-party zero-error classical channel coding with entanglement
We study the effects of quantum entanglement on the performance of two
classical zero-error communication tasks among multiple parties. Both tasks are
generalizations of the two-party zero-error channel-coding problem, where a
sender and a receiver want to perfectly communicate messages through a one-way
classical noisy channel. If the two parties are allowed to share entanglement,
there are several positive results that show the existence of channels for
which they can communicate strictly more than what they could do with classical
resources. In the first task, one sender wants to communicate a common message
to multiple receivers. We show that if the number of receivers is greater than
a certain threshold then entanglement does not allow for an improvement in the
communication for any finite number of uses of the channel. On the other hand,
when the number of receivers is fixed, we exhibit a class of channels for which
entanglement gives an advantage. The second problem we consider features
multiple collaborating senders and one receiver. Classically, cooperation among
the senders might allow them to communicate, on average, more messages than the
sum of what they could communicate individually. We show that whenever a channel allows
single-sender entanglement-assisted advantage, then the gain extends also to
the multi-sender case. Furthermore, we show that entanglement allows for a
peculiar amplification of information which cannot happen classically, for a
fixed number of uses of a channel with multiple senders.
Comment: Some proofs have been modified.
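In the one-shot classical setting, the number of messages that can be sent with zero error equals the independence number of the channel's confusability graph, where two inputs are confusable if some output can arise from both. A brute-force sketch for a small hypothetical channel (the standard pentagon-shaped example):

```python
from itertools import combinations

# Channel as a dict: input -> set of outputs that can occur. Two inputs are
# confusable iff their output sets intersect; a zero-error code is an
# independent set in the confusability graph.
channel = {0: {0, 1}, 1: {1, 2}, 2: {2, 3}, 3: {3, 4}, 4: {4, 0}}

def confusable(a, b):
    return bool(channel[a] & channel[b])

def max_zero_error_messages():
    """Largest set of pairwise non-confusable inputs (brute force)."""
    inputs = list(channel)
    best = 1
    for r in range(2, len(inputs) + 1):
        for code in combinations(inputs, r):
            if all(not confusable(a, b) for a, b in combinations(code, 2)):
                best = max(best, r)
    return best

# This pentagon-shaped channel allows exactly 2 zero-error messages per use.
assert max_zero_error_messages() == 2
```

The entanglement-assisted advantages in the paper are precisely about beating such graph-theoretic limits on certain channels.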
Semantic Security and Indistinguishability in the Quantum World
At CRYPTO 2013, Boneh and Zhandry initiated the study of quantum-secure
encryption. They proposed the first indistinguishability definitions for the
quantum world, in which indistinguishability holds only for classical
messages, and they argued why it might be hard to achieve a stronger
notion. In this work, we show that stronger notions are achievable, where the
indistinguishability holds for quantum superpositions of messages. We
investigate exhaustively the possibilities and subtle differences in defining
such a quantum indistinguishability notion for symmetric-key encryption
schemes. We justify our stronger definition by showing its equivalence to novel
quantum semantic-security notions that we introduce. Furthermore, we show that
our new security definitions cannot be achieved by a large class of ciphers --
those that quasi-preserve the message length. On the other hand, we
provide a secure construction based on quantum-resistant pseudorandom
permutations; this construction can be used as a generic transformation for
turning a large class of encryption schemes into quantum indistinguishable and
hence quantum semantically secure ones. Moreover, our construction is the first
completely classical encryption scheme shown to be secure against an even
stronger notion of indistinguishability, which was previously known to be
achievable only by using quantum messages and arbitrary quantum encryption
circuits.
Comment: 37 pages, 2 figures.
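The quantum notions in this work build on the classical indistinguishability experiment: the adversary picks two messages, receives an encryption of one of them under a fresh key, and must guess which. A minimal classical sketch, hypothetically instantiated with a one-time pad (for which every guessing strategy wins with probability 1/2):

```python
import secrets

def encrypt_otp(key, m):
    """One-time-pad encryption (hypothetical instantiation for this sketch)."""
    return bytes(k ^ b for k, b in zip(key, m))

def ind_experiment(trials=2000):
    """Classical indistinguishability game: the challenger encrypts m_b for a
    random bit b; the adversary guesses b from the ciphertext alone."""
    wins = 0
    for _ in range(trials):
        m0, m1 = b"\x00" * 16, b"\xff" * 16   # adversary's chosen messages
        b = secrets.randbelow(2)
        key = secrets.token_bytes(16)
        c = encrypt_otp(key, m0 if b == 0 else m1)
        guess = c[0] & 1                      # one (necessarily useless) strategy
        wins += int(guess == b)
    return wins / trials

# The ciphertext is independent of b, so the win rate concentrates near 1/2.
```

The paper's contribution is defining and achieving the analogous game when the two challenge messages are quantum superpositions rather than classical strings.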
The operational meaning of min- and max-entropy
We show that the conditional min-entropy Hmin(A|B) of a bipartite state
rho_AB is directly related to the maximum achievable overlap with a maximally
entangled state if only local actions on the B-part of rho_AB are allowed. In
the special case where A is classical, this overlap corresponds to the
probability of guessing A given B. In a similar vein, we connect the
conditional max-entropy Hmax(A|B) to the maximum fidelity of rho_AB with a
product state that is completely mixed on A. In the case where A is classical,
this corresponds to the security of A when used as a secret key in the presence
of an adversary holding B. Because min- and max-entropies are known to
characterize information-processing tasks such as randomness extraction and
state merging, our results establish a direct connection between these tasks
and basic operational problems. For example, they imply that the (logarithm of
the) probability of guessing A given B is a lower bound on the number of
uniform secret bits that can be extracted from A relative to an adversary
holding B.
Comment: 12 pages. v2: no change in content, some typos corrected (including the definition of fidelity in footnote 8), now closer to the published version.
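For the classical case mentioned above, the operational quantity is easy to compute: if A and B are jointly distributed random variables, then p_guess(A|B) = sum_b max_a P(a,b) and Hmin(A|B) = -log2 p_guess(A|B). A small sketch with a hypothetical joint distribution:

```python
import math

# Hypothetical joint distribution P(a, b) of a classical bit A and side
# information bit B held by the adversary.
P = {
    (0, 0): 0.4, (1, 0): 0.1,
    (0, 1): 0.1, (1, 1): 0.4,
}

def p_guess(P):
    """Optimal probability of guessing A from B: sum_b max_a P(a, b)."""
    b_values = {b for (_, b) in P}
    return sum(max(p for (a, bb), p in P.items() if bb == b) for b in b_values)

pg = p_guess(P)               # 0.8: the best strategy guesses a = b
hmin = -math.log2(pg)         # about 0.322 bits of conditional min-entropy
```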
Quantifying the Leakage of Quantum Protocols for Classical Two-Party Cryptography
We study quantum protocols among two distrustful parties. By adopting a
rather strict definition of correctness - guaranteeing that honest players
obtain their correct outcomes only - we can show that every strictly correct
quantum protocol implementing a non-trivial classical primitive necessarily
leaks information to a dishonest player. This extends known impossibility
results to all non-trivial primitives. We provide a framework for quantifying
this leakage and argue that leakage is a good measure for the privacy provided
to the players by a given protocol. Our framework also covers the case where
the two players are helped by a trusted third party. We show that despite the
help of a trusted third party, the players cannot amplify the cryptographic
power of any primitive. All our results hold even against quantum
honest-but-curious adversaries who honestly follow the protocol but purify
their actions and apply a different measurement at the end of the protocol. As
concrete examples, we establish lower bounds on the leakage of standard
universal two-party primitives such as oblivious transfer.
Comment: 38 pages, completely supersedes arXiv:0902.403
Transmission Network Reduction Method using Nonlinear Optimization
This paper presents a new method to determine the susceptances of a reduced
transmission network representation by using nonlinear optimization. We use
Power Transfer Distribution Factors (PTDFs) to convert the original grid into a
reduced version, from which we determine the susceptances. From our case
studies we find that a reduced, injection-independent PTDF
matrix is the best approximation and performs far better than an
injection-dependent PTDF matrix over a given set of
arbitrarily chosen power injection scenarios. We also compare our nonlinear
approach with existing methods from the literature in terms of approximation
error and computation time. On average, we find that our approach reduces the
mean error of the power flow deviations between the original power system and
its reduced version, while achieving higher but reasonable computation times
- …